Results 1 - 20 of 25
1.
bioRxiv ; 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37790457

ABSTRACT

The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues. Here, we circumvented this limitation by leveraging individual differences across 200 participants to systematically compare variations in TFS sensitivity to performance in a range of speech perception tasks. Results suggest that robust TFS sensitivity does not confer additional masking release from pitch or spatial cues, but appears to confer resilience against the effects of reverberation. However, across conditions, we also found that greater TFS sensitivity is associated with faster response times, consistent with reduced listening effort. These findings highlight the perceptual significance of TFS coding for everyday hearing.

2.
Commun Biol ; 6(1): 981, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37752215

ABSTRACT

The auditory system has exquisite temporal coding in the periphery which is transformed into a rate-based code in central auditory structures, like auditory cortex. However, the cortex is still able to synchronize, albeit at lower modulation rates, to acoustic fluctuations. The perceptual significance of this cortical synchronization is unknown. We estimated physiological synchronization limits of cortex (in humans with electroencephalography) and brainstem neurons (in chinchillas) to dynamic binaural cues using a novel system-identification technique, along with parallel perceptual measurements. We find that cortex can synchronize to dynamic binaural cues up to approximately 10 Hz, which aligns well with our measured limits of perceiving dynamic spatial information and utilizing dynamic binaural cues for spatial unmasking, i.e. measures of binaural sluggishness. We also find that the tracking limit for frequency modulation (FM) is similar to the limit for spatial tracking, demonstrating that this sluggish tracking is a more general perceptual limit that can be accounted for by cortical temporal integration limits.
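The ~10 Hz synchronization limit described above can be illustrated with a toy model: if sluggish cortical tracking behaves like a first-order low-pass system, its cutoff can be recovered from gains measured at a handful of modulation rates. This is a hypothetical sketch, not the paper's system-identification method:

```python
import math

def lowpass_gain(f_mod, f_cut):
    # Gain of a first-order low-pass system: a stand-in for sluggish
    # cortical tracking of dynamic binaural cues (illustrative model only).
    return 1.0 / math.sqrt(1.0 + (f_mod / f_cut) ** 2)

def estimate_cutoff(rates, gains):
    # Locate the -3 dB point (gain = 1/sqrt(2)) by linear interpolation
    # between the two modulation rates that bracket it.
    target = 1.0 / math.sqrt(2.0)
    for (f0, g0), (f1, g1) in zip(zip(rates, gains), zip(rates[1:], gains[1:])):
        if g0 >= target >= g1:
            frac = (g0 - target) / (g0 - g1)
            return f0 + frac * (f1 - f0)
    return None

rates = [1, 2, 4, 6, 8, 10, 12, 16, 20, 32]
gains = [lowpass_gain(f, 10.0) for f in rates]
cutoff_hz = estimate_cutoff(rates, gains)  # recovers ~10 Hz
```

In practice the gains would come from measured neural tracking responses rather than an analytic formula; the interpolation step is the same either way.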


Subjects
Auditory Cortex; Time Perception; Humans; Acoustics; Brain Stem; Cortical Synchronization
3.
J Neurosci Methods ; 398: 109954, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37625650

ABSTRACT

BACKGROUND: Disabling hearing loss affects nearly 466 million people worldwide (World Health Organization). The auditory brainstem response (ABR) is the most common non-invasive clinical measure of evoked potentials, e.g., as an objective measure for universal newborn hearing screening. In research, the ABR is widely used for estimating hearing thresholds and cochlear synaptopathy in animal models of hearing loss. The ABR contains multiple waves representing neural activity across different peripheral auditory pathway stages, which arise within the first 10 ms after stimulus onset. Multi-channel (e.g., 32 or higher) caps provide robust measures for a wide variety of EEG applications for the study of human hearing. However, translational studies using preclinical animal models typically rely on only a few subdermal electrodes. NEW METHOD: We evaluated the feasibility of a 32-channel rodent EEG mini-cap for improving the reliability of ABR measures in chinchillas, a common model of human hearing. RESULTS: After confirming initial feasibility, a systematic experimental design tested five potential sources of variability inherent to the mini-cap methodology. We found each source of variance minimally affected mini-cap ABR waveform morphology, thresholds, and wave-1 amplitudes. COMPARISON WITH EXISTING METHOD: The mini-cap methodology was statistically more robust and less variable than the conventional subdermal-needle methodology, most notably when analyzing ABR thresholds. Additionally, fewer repetitions were required to produce a robust ABR response when using the mini-cap. CONCLUSIONS: These results suggest the EEG mini-cap can improve translational studies of peripheral auditory evoked responses. Future work will evaluate the potential of the mini-cap to improve the reliability of more centrally evoked (e.g., cortical) EEG responses.
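The variability comparison above can be boiled down to a simple index such as the coefficient of variation across repeated threshold estimates. The numbers below are invented for illustration, and the study's actual statistics were more involved:

```python
from statistics import mean, stdev

def coefficient_of_variation(xs):
    # CV = sd / mean: one simple index of across-measurement variability
    # in ABR thresholds (illustrative; not the study's exact analysis).
    return stdev(xs) / mean(xs)

# Hypothetical repeated threshold estimates (dB SPL) from one animal
minicap_thresholds = [30, 32, 31, 29, 30]
needle_thresholds = [28, 36, 25, 33, 40]

cv_minicap = coefficient_of_variation(minicap_thresholds)
cv_needle = coefficient_of_variation(needle_thresholds)  # larger: more variable
```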


Subjects
Deafness; Hearing Loss; Animals; Infant, Newborn; Humans; Evoked Potentials, Auditory, Brain Stem/physiology; Chinchilla; Noise; Reproducibility of Results; Auditory Threshold/physiology; Hearing Loss/diagnosis; Electroencephalography; Acoustic Stimulation
4.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 14052-14054, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37402186

ABSTRACT

A recent paper claims that a newly proposed method classifies EEG data recorded from subjects viewing ImageNet stimuli better than two prior methods. However, the analysis used to support that claim is based on confounded data. We repeat the analysis on a large new dataset that is free from that confound. Training and testing on aggregated supertrials derived by summing trials demonstrates that the two prior methods achieve statistically significant above-chance accuracy while the newly proposed method does not.
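The supertrial construction referred to above can be sketched as follows: same-class trials are summed in groups of k, so stimulus-locked signal adds coherently while independent noise grows only as sqrt(k). A minimal sketch with list-based trials (the paper's actual preprocessing is not reproduced here):

```python
import random

def make_supertrials(trials, labels, k):
    # Group same-class trials and sum each group of k sample-by-sample.
    by_class = {}
    for trial, lab in zip(trials, labels):
        by_class.setdefault(lab, []).append(trial)
    supers, super_labels = [], []
    for lab, group in by_class.items():
        random.shuffle(group)  # avoid systematically pairing adjacent trials
        for i in range(0, len(group) - k + 1, k):
            chunk = group[i:i + k]
            supers.append([sum(samples) for samples in zip(*chunk)])
            super_labels.append(lab)
    return supers, super_labels

supers, super_labels = make_supertrials(
    [[1, 2], [1, 2], [3, 4], [3, 4]], ['a', 'a', 'b', 'b'], k=2)
# -> one supertrial per class: [2, 4] for 'a' and [6, 8] for 'b'
```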

5.
Behav Res Methods ; 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37326771

ABSTRACT

Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of limited available control of the acoustics, and the inability to perform audiometry to confirm normal-hearing status of participants. Here, we outline our approach to mitigate these challenges and validate our procedures by comparing web-based measurements to lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.
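The screening logic can be summarized as a conjunction of checks. The cutoff values below are illustrative placeholders, not the criteria derived from the meta-analysis in the paper:

```python
def passes_screening(suprathreshold_pc, no_reported_issues, binaural_pc):
    # Pass only if all three (hypothetical) criteria hold:
    # near-ceiling suprathreshold performance, a clean hearing survey,
    # and above-chance performance on the binaural headphone check.
    return (suprathreshold_pc >= 0.85
            and no_reported_issues
            and binaural_pc >= 0.75)

eligible = passes_screening(0.92, True, 0.80)   # -> True
excluded = passes_screening(0.92, False, 0.80)  # -> False
```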

6.
Sci Rep ; 13(1): 10216, 2023 06 23.
Article in English | MEDLINE | ID: mdl-37353552

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.


Subjects
Speech Intelligibility; Speech Perception; Speech Perception/physiology; Noise; Auditory Perception; Electroencephalography
7.
bioRxiv ; 2023 May 22.
Article in English | MEDLINE | ID: mdl-36712081

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.

8.
Commun Biol ; 5(1): 733, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35869142

ABSTRACT

Animal models suggest that cochlear afferent nerve endings may be more vulnerable than sensory hair cells to damage from acoustic overexposure and aging. Because neural degeneration without hair-cell loss cannot be detected in standard clinical audiometry, whether such damage occurs in humans is hotly debated. Here, we address this debate through co-ordinated experiments in at-risk humans and a wild-type chinchilla model. Cochlear neuropathy leads to large and sustained reductions of the wideband middle-ear muscle reflex in chinchillas. Analogously, human wideband reflex measures revealed distinct damage patterns in middle age, and in young individuals with histories of high acoustic exposure. Analysis of an independent large public dataset and additional measurements using clinical equipment corroborated the patterns revealed by our targeted cross-species experiments. Taken together, our results suggest that cochlear neural damage is widespread even in populations with clinically normal hearing.


Subjects
Cochlea; Hair Cells, Auditory; Acoustic Stimulation; Animals; Chinchilla; Hair Cells, Auditory/physiology; Hearing; Humans; Middle Aged
9.
eNeuro ; 9(2)2022.
Article in English | MEDLINE | ID: mdl-35193890

ABSTRACT

Neural phase-locking to temporal fluctuations is a fundamental and unique mechanism by which acoustic information is encoded by the auditory system. The perceptual role of this metabolically expensive mechanism, the neural phase-locking to temporal fine structure (TFS) in particular, is debated. Although hypothesized, it is unclear whether auditory perceptual deficits in certain clinical populations are attributable to deficits in TFS coding. Efforts to uncover the role of TFS have been impeded by the fact that there are no established assays for quantifying the fidelity of TFS coding at the individual level. While many candidates have been proposed, for an assay to be useful, it should not only intrinsically depend on TFS coding, but should also have the property that individual differences in the assay reflect TFS coding per se over and beyond other sources of variance. Here, we evaluate a range of behavioral and electroencephalogram (EEG)-based measures as candidate individualized measures of TFS sensitivity. Our comparisons of behavioral and EEG-based metrics suggest that extraneous variables dominate both behavioral scores and EEG amplitude metrics, rendering them ineffective. After adjusting behavioral scores using lapse rates, and extracting latency or percent-growth metrics from EEG, interaural timing sensitivity measures exhibit robust behavior-EEG correlations. Together with the fact that unambiguous theoretical links can be made relating binaural measures and phase-locking to TFS, our results suggest that these "adjusted" binaural assays may be well suited for quantifying individual TFS processing.
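One standard way to "adjust behavioral scores using lapse rates," as mentioned above, is to invert the usual psychometric mapping between true sensitivity and observed proportion correct; whether this matches the paper's exact adjustment is an assumption:

```python
def lapse_adjusted_score(p_obs, chance=0.5, lapse=0.02):
    # Invert p_obs = chance + (1 - chance - lapse) * p_true to recover
    # a lapse-adjusted sensitivity estimate, clipped to [0, 1].
    p_true = (p_obs - chance) / (1.0 - chance - lapse)
    return max(0.0, min(1.0, p_true))

perfect = lapse_adjusted_score(0.98)  # 0.98 is ceiling under a 2% lapse rate
guessing = lapse_adjusted_score(0.5)  # chance performance -> 0.0
```

The chance and lapse parameters here are illustrative defaults; in practice the lapse rate is estimated per subject from catch trials or psychometric-function fits.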


Subjects
Auditory Perception; Acoustic Stimulation/methods; Humans
10.
Ear Hear ; 43(3): 849-861, 2022.
Article in English | MEDLINE | ID: mdl-34751679

ABSTRACT

OBJECTIVES: Despite the widespread use of noise reduction (NR) in modern digital hearing aids, our neurophysiological understanding of how NR affects speech-in-noise perception and why its effect is variable is limited. The current study aimed to (1) characterize the effect of NR on the neural processing of target speech and (2) seek neural determinants of individual differences in the NR effect on speech-in-noise performance, hypothesizing that an individual's own capability to inhibit background noise would inversely predict NR benefits in speech-in-noise perception. DESIGN: Thirty-six adult listeners with normal hearing participated in the study. Behavioral and electroencephalographic responses were simultaneously obtained during a speech-in-noise task in which natural monosyllabic words were presented at three different signal-to-noise ratios, each with NR off and on. A within-subject analysis assessed the effect of NR on cortical evoked responses to target speech in the temporal-frontal speech and language brain regions, including supramarginal gyrus and inferior frontal gyrus in the left hemisphere. In addition, an across-subject analysis related an individual's tolerance to noise, measured as the amplitude ratio of auditory-cortical responses to target speech and background noise, to their speech-in-noise performance. RESULTS: At the group level, in the poorest signal-to-noise ratio condition, NR significantly increased early supramarginal gyrus activity and decreased late inferior frontal gyrus activity, indicating a switch to more immediate lexical access and less effortful cognitive processing, although no improvement in behavioral performance was found. The across-subject analysis revealed that the cortical index of individual noise tolerance significantly correlated with NR-driven changes in speech-in-noise performance. CONCLUSIONS: NR can facilitate speech-in-noise processing despite no improvement in behavioral performance. Findings from the current study also indicate that people with lower noise tolerance are likely to benefit more from NR. Overall, results suggest that future research should take a mechanistic approach to NR outcomes and individual noise tolerance.


Subjects
Hearing Aids; Speech Perception; Adult; Humans; Noise; Signal-To-Noise Ratio; Speech; Speech Perception/physiology
11.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9217-9220, 2022 12.
Article in English | MEDLINE | ID: mdl-34665721

ABSTRACT

Neuroimaging experiments in general, and EEG experiments in particular, must take care to avoid confounds. A recent TPAMI paper uses data that suffers from a serious previously reported confound. We demonstrate that their new model and analysis methods do not remedy this confound, and therefore that their claims of high accuracy and neuroscience relevance are invalid.


Subjects
Algorithms; Brain Mapping; Brain Mapping/methods; Brain/diagnostic imaging; Neuroimaging; Learning; Electroencephalography/methods
12.
J Acoust Soc Am ; 150(3): 2230, 2021 09.
Article in English | MEDLINE | ID: mdl-34598642

ABSTRACT

A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap. We find that the neural envelope-domain signal-to-noise ratio in target-speech encoding, which is shaped by masker modulations, predicts intelligibility over a range of strategically chosen realistic listening conditions unseen by the predictive model. This provides neurophysiological evidence for modulation masking. Moreover, using high-resolution vocoding to carefully control peripheral envelopes, we show that target-envelope coding fidelity in the brain depends not only on envelopes conveyed by the cochlea, but also on the temporal fine structure (TFS), which supports scene segregation. Our results are consistent with the notion that temporal coherence of sound elements across envelopes and/or TFS influences scene analysis and attentive selection of a target sound. Our findings also inform speech-intelligibility models and technologies attempting to improve real-world speech communication.
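The "envelope-domain signal-to-noise ratio" can be caricatured as a regression: project the EEG response onto the target-speech envelope and compare the captured power to the residual power. A schematic stand-in (the paper's actual metric and preprocessing are not specified here):

```python
def envelope_snr(eeg, envelope):
    # Least-squares projection of the EEG signal onto the target-speech
    # envelope; SNR = envelope-locked power / residual power
    # (illustrative metric only).
    n = len(eeg)
    m_env = sum(envelope) / n
    m_eeg = sum(eeg) / n
    env_c = [e - m_env for e in envelope]
    eeg_c = [g - m_eeg for g in eeg]
    beta = sum(e * g for e, g in zip(env_c, eeg_c)) / sum(e * e for e in env_c)
    fit = [beta * e for e in env_c]
    resid = [g - f for g, f in zip(eeg_c, fit)]
    return sum(f * f for f in fit) / sum(r * r for r in resid)

# A response that mostly follows the envelope yields a high SNR:
snr = envelope_snr([0.1, 1.0, 2.1, 3.0], [0, 1, 2, 3])
```

Real analyses work on band-limited, time-aligned signals and cross-validated model fits; this toy version only conveys the shape of the metric.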


Subjects
Speech Intelligibility; Speech Perception; Acoustic Stimulation; Acoustics; Auditory Perception; Humans; Perceptual Masking; Signal-To-Noise Ratio
13.
Article in English | MEDLINE | ID: mdl-33211652

ABSTRACT

A recent paper [31] claims to classify brain processing evoked in subjects watching ImageNet stimuli as measured with EEG and to employ a representation derived from this processing to construct a novel object classifier. That paper, together with a series of subsequent papers [11, 18, 20, 24, 25, 30, 34], claims to achieve successful results on a wide variety of computer-vision tasks, including object classification, transfer learning, and generation of images depicting human perception and thought using brain-derived representations measured through EEG. Our novel experiments and analyses demonstrate that their results crucially depend on the block design that they employ, where all stimuli of a given class are presented together, and fail with a rapid-event design, where stimuli of different classes are randomly intermixed. The block design leads to classification of arbitrary brain states based on block-level temporal correlations that are known to exist in all EEG data, rather than stimulus-related activity. Because every trial in their test sets comes from the same block as many trials in the corresponding training sets, their block design thus leads to classifying arbitrary temporal artifacts of the data instead of stimulus-related activity. This invalidates all subsequent analyses performed on this data in multiple published papers and calls into question all of the reported results. We further show that a novel object classifier constructed with a random codebook performs as well as or better than a novel object classifier constructed with the representation extracted from EEG data, suggesting that the performance of their classifier constructed with a representation extracted from EEG data does not benefit from the brain-derived representation. Together, our results illustrate the far-reaching implications of the temporal autocorrelations that exist in all neuroimaging data for classification experiments. Further, our results calibrate the underlying difficulty of the tasks involved and caution against overly optimistic, but incorrect, claims to the contrary.
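The core confound is easy to reproduce in simulation: give each temporal block its own slow drift, make trials otherwise class-uninformative, and a trivial classifier still scores far above chance under a block design while staying at chance under a rapid-event design. A self-contained sketch on synthetic data, not the authors' dataset:

```python
import random
from statistics import mean

random.seed(0)

def simulate(block_design, n_classes=5, trials_per_class=40):
    # Each block has a slow "drift" level; trial features carry no
    # stimulus information at all, only drift plus measurement noise.
    feats, labels = [], []
    for block in range(n_classes):
        drift = random.gauss(0.0, 10.0)  # block-level temporal artifact
        for _ in range(trials_per_class):
            feats.append(drift + random.gauss(0.0, 0.5))
            if block_design:
                labels.append(block)  # all stimuli of a class in one block
            else:
                labels.append(random.randrange(n_classes))  # intermixed
    return feats, labels

def nearest_centroid_accuracy(feats, labels):
    # Train on even-indexed trials, test on odd-indexed ones, so test
    # trials share blocks with training trials (as in the block design).
    train = [(f, l) for i, (f, l) in enumerate(zip(feats, labels)) if i % 2 == 0]
    test = [(f, l) for i, (f, l) in enumerate(zip(feats, labels)) if i % 2 == 1]
    centroids = {lab: mean(f for f, l in train if l == lab)
                 for lab in set(l for _, l in train)}
    hits = sum(1 for f, l in test
               if min(centroids, key=lambda c: abs(f - centroids[c])) == l)
    return hits / len(test)

block_acc = nearest_centroid_accuracy(*simulate(block_design=True))
rapid_acc = nearest_centroid_accuracy(*simulate(block_design=False))
# block_acc lands far above the 0.2 chance level; rapid_acc stays near chance
```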

15.
eNeuro ; 6(5)2019.
Article in English | MEDLINE | ID: mdl-31585928

ABSTRACT

The ability to selectively attend to speech in the presence of other competing talkers is critical for everyday communication; yet the neural mechanisms facilitating this process are poorly understood. Here, we use electroencephalography (EEG) to study how a mixture of two speech streams is represented in the brain as subjects attend to one stream or the other. To characterize the speech-EEG relationships and how they are modulated by attention, we estimate the statistical association between each canonical EEG frequency band (delta, theta, alpha, beta, low-gamma, and high-gamma) and the envelope of each of ten different frequency bands in the input speech. Consistent with previous literature, we find that low-frequency (delta and theta) bands show greater speech-EEG coherence when the speech stream is attended compared to when it is ignored. We also find that the envelope of the low-gamma band shows a similar attention effect, a result not previously reported with EEG. This is consistent with the prevailing theory that neural dynamics in the gamma range are important for attention-dependent routing of information in cortical circuits. In addition, we also find that the greatest attention-dependent increases in speech-EEG coherence are seen in the mid-frequency acoustic bands (0.5-3 kHz) of input speech and the temporal-parietal EEG sensors. Finally, we find individual differences in the following: (1) the specific set of speech-EEG associations that are the strongest, (2) the EEG and speech features that are the most informative about attentional focus, and (3) the overall magnitude of attentional enhancement of speech-EEG coherence.


Subjects
Attention/physiology; Speech Perception/physiology; Adult; Auditory Cortex/physiology; Electroencephalography; Female; Humans; Male
16.
Neuroscience ; 407: 53-66, 2019 05 21.
Article in English | MEDLINE | ID: mdl-30853540

ABSTRACT

Studies in multiple species, including in post-mortem human tissue, have shown that normal aging and/or acoustic overexposure can lead to a significant loss of afferent synapses innervating the cochlea. Hypothetically, this cochlear synaptopathy can lead to perceptual deficits in challenging environments and can contribute to central neural effects such as tinnitus. However, because cochlear synaptopathy can occur without any measurable changes in audiometric thresholds, synaptopathy can remain hidden from standard clinical diagnostics. To understand the perceptual sequelae of synaptopathy and to evaluate the efficacy of emerging therapies, sensitive and specific non-invasive measures at the individual patient level need to be established. Pioneering experiments in specific mice strains have helped identify many candidate assays. These include auditory brainstem responses, the middle-ear muscle reflex, envelope-following responses, and extended high-frequency audiograms. Unfortunately, because these non-invasive measures can be also affected by extraneous factors other than synaptopathy, their application and interpretation in humans is not straightforward. Here, we systematically examine six extraneous factors through a series of interrelated human experiments aimed at understanding their effects. Using strategies that may help mitigate the effects of such extraneous factors, we then show that these suprathreshold physiological assays exhibit across-individual correlations with each other indicative of contributions from a common physiological source consistent with cochlear synaptopathy. Finally, we discuss the application of these assays to two key outstanding questions, and discuss some barriers that still remain. This article is part of a Special Issue entitled: Hearing Loss, Tinnitus, Hyperacusis, Central Gain.


Subjects
Auditory Threshold/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Individuality; Tinnitus/etiology; Cochlea/physiology; Hearing/physiology; Hearing Loss, Noise-Induced/complications; Humans; Synapses/physiology; Tinnitus/physiopathology
17.
Front Neurosci ; 10: 255, 2016.
Article in English | MEDLINE | ID: mdl-27375417

ABSTRACT

Abnormalities in cortical connectivity and evoked responses have been extensively documented in autism spectrum disorder (ASD). However, specific signatures of these cortical abnormalities remain elusive, with data pointing toward abnormal patterns of both increased and reduced response amplitudes and functional connectivity. We have previously proposed, using magnetoencephalography (MEG) data, that apparent inconsistencies in prior studies could be reconciled if functional connectivity in ASD was reduced in the feedback (top-down) direction, but increased in the feedforward (bottom-up) direction. Here, we continue this line of investigation by assessing abnormalities restricted to the onset, feedforward inputs driven, component of the response to vibrotactile stimuli in somatosensory cortex in ASD. Using a novel method that measures the spatio-temporal divergence of cortical activation, we found that relative to typically developing participants, the ASD group was characterized by an increase in the initial onset component of the cortical response, and a faster spread of local activity. Given the early time window, the results could be interpreted as increased thalamocortical feedforward connectivity in ASD, and offer a plausible mechanism for the previously observed increased response variability in ASD, as well as for the commonly observed behaviorally measured tactile processing abnormalities associated with the disorder.

18.
J Neurosci ; 36(13): 3755-64, 2016 Mar 30.
Article in English | MEDLINE | ID: mdl-27030760

ABSTRACT

Evidence from animal and human studies suggests that moderate acoustic exposure, causing only transient threshold elevation, can nonetheless cause "hidden hearing loss" that interferes with coding of suprathreshold sound. Such noise exposure destroys synaptic connections between cochlear hair cells and auditory nerve fibers; however, there is no clinical test of this synaptopathy in humans. In animals, synaptopathy reduces the amplitude of auditory brainstem response (ABR) wave-I. Unfortunately, ABR wave-I is difficult to measure in humans, limiting its clinical use. Here, using analogous measurements in humans and mice, we show that the effect of masking noise on the latency of the more robust ABR wave-V mirrors changes in ABR wave-I amplitude. Furthermore, in our human cohort, the effect of noise on wave-V latency predicts perceptual temporal sensitivity. Our results suggest that measures of the effects of noise on ABR wave-V latency can be used to diagnose cochlear synaptopathy in humans. SIGNIFICANCE STATEMENT: Although there are suspicions that cochlear synaptopathy affects humans with normal hearing thresholds, no one has yet reported a clinical measure that is a reliable marker of such loss. By combining human and animal data, we demonstrate that the latency of auditory brainstem response wave-V in noise reflects auditory nerve loss. This is the first study of human listeners with normal hearing thresholds that links individual differences observed in behavior and auditory brainstem response timing to cochlear synaptopathy. These results can guide development of a clinical test to reveal this previously unknown form of noise-induced hearing loss in humans.


Subjects
Ear, Inner/pathology; Evoked Potentials, Auditory, Brain Stem/physiology; Hearing Loss, Noise-Induced/pathology; Noise; Reaction Time/physiology; Synapses/pathology; Acoustic Stimulation; Adult; Animals; Auditory Perception/physiology; Auditory Threshold/physiology; Disease Models, Animal; Electroencephalography; Female; Hearing Loss, Noise-Induced/physiopathology; Humans; Male; Mice; Otoacoustic Emissions, Spontaneous/physiology; Young Adult
19.
J Acoust Soc Am ; 138(3): 1637-59, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428802

ABSTRACT

Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses are not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN) and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2-2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing-impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities.


Subjects
Brain Stem/physiology; Cochlear Nerve/physiology; Acoustic Stimulation; Basilar Membrane/physiology; Behavioral Sciences; Cochlea/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Hearing/physiology; Humans; Otoacoustic Emissions, Spontaneous/physiology; Reaction Time/physiology; Vibration
20.
Brain Res ; 1626: 146-64, 2015 Nov 11.
Article in English | MEDLINE | ID: mdl-26187756

ABSTRACT

Auditory brainstem responses (ABRs) and their steady-state counterpart (subcortical steady-state responses, SSSRs) are generally thought to be insensitive to cognitive demands. However, a handful of studies report that SSSRs are modulated depending on the subject's focus of attention, either towards or away from an auditory stimulus. Here, we explored whether attentional focus affects the envelope-following response (EFR), which is a particular kind of SSSR, and if so, whether the effects are specific to which sound elements in a sound mixture a subject is attending (selective auditory attentional modulation), specific to attended sensory input (inter-modal attentional modulation), or insensitive to attentional focus. We compared the strength of EFR-stimulus phase locking in human listeners under various tasks: listening to a monaural stimulus, selectively attending to a particular ear during dichotic stimulus presentation, and attending to visual stimuli while ignoring dichotic auditory inputs. We observed no systematic changes in the EFR across experimental manipulations, even though cortical EEG revealed attention-related modulations of alpha activity during the task. We conclude that attentional effects, if any, on human subcortical representation of sounds cannot be observed robustly using EFRs. This article is part of a Special Issue entitled SI: Prediction and Attention.
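The "strength of EFR-stimulus phase locking" is often quantified with the phase-locking value (PLV): the magnitude of the mean unit phasor across trials. A generic illustration on synthetic phases (the paper's exact metric may differ):

```python
import cmath
import math
import random

def phase_locking_value(phases):
    # PLV = |mean of exp(i*phase)| across trials: 1 for perfect
    # phase locking, near 0 for uniformly random phases.
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

random.seed(1)
locked = [random.gauss(math.pi / 4, 0.2) for _ in range(200)]    # clustered
unlocked = [random.uniform(-math.pi, math.pi) for _ in range(200)]
plv_locked = phase_locking_value(locked)      # close to 1
plv_unlocked = phase_locking_value(unlocked)  # close to 0
```

In an EFR analysis the phases would come from the response spectrum at the stimulus modulation frequency on each trial, rather than being drawn synthetically as here.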


Subjects
Attention/physiology; Auditory Perception/physiology; Cerebral Cortex/physiology; Evoked Potentials, Auditory, Brain Stem; Acoustic Stimulation; Adolescent; Adult; Alpha Rhythm; Electroencephalography; Female; Humans; Male; Sound Spectrography; Young Adult